Opening the Black Box of Local Projections
Coulombe, Philippe Goulet, Klieber, Karin
Local projections (LPs) are widely used in empirical macroeconomics to estimate impulse responses to policy interventions. Yet, in many ways, they are black boxes. It is often unclear what mechanism or historical episodes drive a particular estimate. We introduce a new decomposition of LP estimates into the sum of contributions of historical events, which is the product, for each time stamp, of a weight and the realization of the response variable. In the least squares case, we show that these weights admit two interpretations. First, they represent purified and standardized shocks. Second, they serve as proximity scores between the projected policy intervention and past interventions in the sample. Notably, this second interpretation extends naturally to machine learning methods, many of which yield impulse responses that, while nonlinear in predictors, still aggregate past outcomes linearly via proximity-based weights. Applying this framework to shocks in monetary and fiscal policy, global temperature, and the excess bond premium, we find that easily identifiable events (such as Nixon's interference with the Fed, stagflation, World War II, and the Mount Agung volcanic eruption) emerge as dominant drivers of often heavily concentrated impulse response estimates.
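The least-squares decomposition can be sketched directly: by the Frisch-Waugh-Lovell theorem, the LP coefficient at horizon h equals a weighted sum of future outcomes, with weights built from the residualized ("purified") shock. A minimal sketch on simulated data (all variable names are illustrative, not the paper's notation):

```python
import numpy as np

# simulated data: y responds to the shock with decaying coefficients
rng = np.random.default_rng(0)
T, h = 200, 4
shock = rng.normal(size=T)
controls = np.column_stack([np.ones(T), rng.normal(size=T)])
y = np.convolve(shock, [1.0, 0.8, 0.6, 0.4, 0.2])[:T] + 0.5 * rng.normal(size=T)

# LP at horizon h: regress y_{t+h} on shock_t and controls_t
yh, s, C = y[h:], shock[:T - h], controls[:T - h]

# purify the shock of the controls (Frisch-Waugh-Lovell step)
e = s - C @ np.linalg.lstsq(C, s, rcond=None)[0]
w = e / (e @ e)        # one weight per historical time stamp
beta_h = w @ yh        # LP estimate = weighted sum of historical outcomes

# the decomposition reproduces the usual OLS coefficient on the shock
beta_ols = np.linalg.lstsq(np.column_stack([s, C]), yh, rcond=None)[0][0]
```

Each term w[t] * yh[t] is then the contribution of the event at time t to the impulse response estimate.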
Ordinary Least Squares as an Attention Mechanism
I show that ordinary least squares (OLS) predictions can be rewritten as the output of a restricted attention module, akin to those forming the backbone of large language models. This connection offers an alternative perspective on attention beyond the conventional information retrieval framework, making it more accessible to researchers and analysts with a background in traditional statistics. It falls into place when OLS is framed as a similarity-based method in a transformed regressor space, distinct from the standard view based on partial correlations. In fact, the OLS solution can be recast as the outcome of an alternative problem: minimizing squared prediction errors by optimizing the embedding space in which training and test vectors are compared via inner products. Rather than estimating coefficients directly, we equivalently learn optimal encoding and decoding operations for predictors. From this vantage point, OLS maps naturally onto the query-key-value structure of attention mechanisms. Building on this foundation, I discuss key elements of Transformer-style attention and draw connections to classic ideas from time series econometrics.
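The central identity can be checked numerically in a few lines (a sketch, not the paper's notation): with G = X'X, the OLS prediction at a test point x0 equals attention-style weights, inner products of the embedded query (x0) and keys (training rows) under the shared embedding G^{-1/2}, applied to the training outcomes as values:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 50, 3
X = rng.normal(size=(T, K))
y = rng.normal(size=T)
x0 = rng.normal(size=K)

# shared embedding: the inverse square root of the Gram matrix G = X'X
G = X.T @ X
vals, vecs = np.linalg.eigh(G)
Q = vecs @ np.diag(vals ** -0.5) @ vecs.T

# attention view: score each training point by <Q x0, Q x_t>,
# then aggregate the training outcomes y_t (the "values")
weights = (Q @ x0) @ (Q @ X.T)
yhat_attn = weights @ y

# standard OLS prediction for comparison
yhat_ols = x0 @ np.linalg.solve(G, X.T @ y)
```

The two predictions coincide because weights = x0' G^{-1} X', i.e. the usual hat-matrix row, rewritten as inner products in the transformed regressor space.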
Dual Interpretation of Machine Learning Forecasts
Coulombe, Philippe Goulet, Goebel, Maximilian, Klieber, Karin
Machine learning predictions are typically interpreted as the sum of contributions of predictors. Yet, each out-of-sample prediction can also be expressed as a linear combination of in-sample values of the predicted variable, with weights corresponding to pairwise proximity scores between current and past economic events. While this dual route leads nowhere in some contexts (e.g., large cross-sectional datasets), it provides sparser interpretations in settings with many regressors and little training data, like macroeconomic forecasting. In this case, the sequence of contributions can be visualized as a time series, allowing analysts to explain predictions as quantifiable combinations of historical analogies. Moreover, the weights can be viewed as those of a data portfolio, inspiring new diagnostic measures such as forecast concentration, short position, and turnover. We show how weights can be retrieved seamlessly for (kernel) ridge regression, random forest, boosted trees, and neural networks. Then, we apply these tools to analyze post-pandemic forecasts of inflation, GDP growth, and recession probabilities. In all cases, the approach opens the black box from a new angle and demonstrates how machine learning models leverage history partly repeating itself.
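For ridge regression the dual weights fall out of the hat matrix in closed form. A minimal sketch with illustrative diagnostic measures (the names concentration and short_position follow the portfolio analogy; the exact definitions here are simplified stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
T, K, lam = 40, 10, 1.0
X = rng.normal(size=(T, K))
y = rng.normal(size=T)
x0 = rng.normal(size=K)

# dual route: the out-of-sample forecast is w'y, a portfolio of past outcomes
w = x0 @ np.linalg.solve(X.T @ X + lam * np.eye(K), X.T)
forecast = w @ y

# portfolio-style diagnostics on the weights
w_norm = w / np.abs(w).sum()
concentration = (w_norm ** 2).sum()     # Herfindahl-style concentration
short_position = w[w < 0].sum()         # total weight on "shorted" history

# primal route for comparison: coefficients on the predictors
beta = np.linalg.solve(X.T @ X + lam * np.eye(K), X.T @ y)
```

The primal (x0 @ beta) and dual (w @ y) forecasts are identical; only the interpretation changes, from predictor contributions to contributions of historical observations.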
From Reactive to Proactive Volatility Modeling with Hemisphere Neural Networks
Coulombe, Philippe Goulet, Frenette, Mikael, Klieber, Karin
We reinvigorate maximum likelihood estimation (MLE) for macroeconomic density forecasting through a novel neural network architecture with dedicated mean and variance hemispheres. Our architecture features several key ingredients making MLE work in this context. First, the hemispheres share a common core at the entrance of the network which accommodates various forms of time variation in the error variance. Second, we introduce a volatility emphasis constraint that breaks mean/variance indeterminacy in this class of overparametrized nonlinear models. Third, we conduct a blocked out-of-bag reality check to curb overfitting in both conditional moments. Fourth, the algorithm utilizes standard deep learning software and thus handles large data sets, both computationally and statistically. Ergo, our Hemisphere Neural Network (HNN) provides proactive volatility forecasts based on leading indicators when it can, and reactive volatility based on the magnitude of previous prediction errors when it must. We evaluate point and density forecasts with an extensive out-of-sample experiment and benchmark against a suite of models ranging from classics to more modern machine learning-based offerings. In all cases, HNN fares well by consistently providing accurate mean/variance forecasts for all targets and horizons. Studying the resulting volatility paths reveals its versatility, while probabilistic forecasting evaluation metrics showcase its enviable reliability. Finally, we also demonstrate how this machinery can be merged with other structured deep learning models by revisiting Goulet Coulombe (2022)'s Neural Phillips Curve.
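The MLE objective behind such mean/variance architectures is the Gaussian negative log-likelihood with separate outputs for the two moments. The toy below is not the HNN architecture itself (constants stand in for the two hemispheres), but it illustrates why the objective works: for a fixed mean output, the loss is minimized when the variance output matches the realized error variance:

```python
import numpy as np

def gaussian_nll(y, mean, log_var):
    # average Gaussian negative log-likelihood (constants dropped),
    # with a log-variance parameterization for numerical stability
    return 0.5 * (log_var + (y - mean) ** 2 / np.exp(log_var)).mean()

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=1000)

# "mean hemisphere" output: here just the sample mean
mean_head = y.mean()

# scan the "variance hemisphere" output over a grid of log-variances
grid = np.linspace(-2.0, 4.0, 200)
best_log_var = grid[np.argmin([gaussian_nll(y, mean_head, lv) for lv in grid])]
```

With scale 2.0 in the simulation, the minimizing variance exp(best_log_var) lands near 4, the realized error variance, which is what allows a trained network to produce calibrated volatility paths.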
Maximally Machine-Learnable Portfolios
Coulombe, Philippe Goulet, Goebel, Maximilian
When it comes to stock returns, any form of predictability can bolster risk-adjusted profitability. We develop a collaborative machine learning algorithm that optimizes portfolio weights so that the resulting synthetic security is maximally predictable. Precisely, we introduce MACE, a multivariate extension of Alternating Conditional Expectations that achieves the aforementioned goal by wielding a Random Forest on one side of the equation, and a constrained Ridge Regression on the other. There are two key improvements with respect to Lo and MacKinlay's original maximally predictable portfolio approach. First, it accommodates any (nonlinear) forecasting algorithm and predictor set. Second, it handles large portfolios. We conduct exercises at the daily and monthly frequency and report significant increases in predictability and profitability using very little conditioning information. Interestingly, predictability is found in bad as well as good times, and MACE successfully navigates the debacle of 2022.
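The alternation can be caricatured in a few lines. This is only a stylized sketch, not MACE itself: plain OLS stands in for the Random Forest side, and a simple ridge step with scale normalization stands in for the constrained Ridge Regression:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, K = 300, 8, 3
R = rng.normal(size=(T, N))   # asset returns (illustrative random data)
Z = rng.normal(size=(T, K))   # lagged predictors

w = np.ones(N) / N            # initial portfolio weights
lam = 1.0
for _ in range(10):
    s = R @ w                                      # synthetic portfolio return
    # forecasting side: fit E[s | Z] (OLS here, Random Forest in the paper)
    f = Z @ np.linalg.lstsq(Z, s, rcond=None)[0]
    # portfolio side: ridge step so that R @ w tracks the fitted values
    w = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ f)
    w = w / np.abs(w).sum()                        # fix the scale of the weights
```

Each pass increases how well the conditioning information explains the synthetic security, which is the sense in which the resulting portfolio is "maximally machine-learnable."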
Slow-Growing Trees
Random Forest's performance can be matched by a single slow-growing tree (SGT), which uses a learning rate to tame CART's greedy algorithm. SGT exploits the view that CART is an extreme case of an iterative weighted least square procedure. Moreover, a unifying view of Boosted Trees (BT) and Random Forests (RF) is presented. Greedy ML algorithms' outcomes can be improved using either "slow learning" or diversification. SGT applies the former to estimate a single deep tree, and Booging (bagging stochastic BT with a high learning rate) uses the latter with additive shallow trees. The performance of this tree ensemble quaternity (Booging, BT, SGT, RF) is assessed on simulated and real regression tasks.
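A stylized one-dimensional sketch of the slow-learning idea (a toy, not the paper's implementation): grow a single tree by repeatedly splitting the leaf with the largest residual gain, but shrink each fitted step by a learning rate eta, where eta = 1 recovers CART's greedy updates:

```python
import numpy as np

def slow_growing_tree(x, y, eta=0.1, n_steps=150, min_leaf=10):
    pred = np.full(len(y), y.mean())
    leaf = np.zeros(len(y), dtype=int)   # leaf membership of each observation
    next_id = 1
    for _ in range(n_steps):
        resid = y - pred
        best = None                      # (gain, leaf id, threshold)
        for lid in np.unique(leaf):
            idx = np.where(leaf == lid)[0]
            for thr in np.unique(x[idx])[1:]:
                left, right = idx[x[idx] < thr], idx[x[idx] >= thr]
                if len(left) < min_leaf or len(right) < min_leaf:
                    continue
                gain = len(left) * resid[left].mean() ** 2 \
                     + len(right) * resid[right].mean() ** 2
                if best is None or gain > best[0]:
                    best = (gain, lid, thr)
        if best is None:                 # no admissible split remains
            break
        _, lid, thr = best
        idx = np.where(leaf == lid)[0]
        left, right = idx[x[idx] < thr], idx[x[idx] >= thr]
        # shrink the fitted step by eta instead of taking CART's full step
        pred[left] += eta * resid[left].mean()
        pred[right] += eta * resid[right].mean()
        leaf[left], leaf[right] = next_id, next_id + 1
        next_id += 2
    return pred

rng = np.random.default_rng(0)
x = rng.uniform(size=300)
y = np.where(x < 0.5, 0.0, 2.0) + 0.3 * rng.normal(size=300)
pred = slow_growing_tree(x, y)
```

Because each step is damped, the tree approaches the signal gradually rather than memorizing it in one greedy pass, which is the "slow learning" route to improving a greedy algorithm.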
Can Machine Learning Catch the COVID-19 Recession?
Coulombe, Philippe Goulet, Marcellino, Massimiliano, Stevanovic, Dalibor
Forecasting economic developments in crisis times is problematic, since the realizations of the variables are far from their average values, while econometric models, linear ones in particular, are typically better at explaining and predicting values close to the average. The situation is even worse for the COVID-19-induced recession, when typically well-performing econometric models, such as Bayesian VARs with stochastic volatility, have trouble tracking the unprecedented fall in real activity and labour market indicators; see, for example, Carriero et al. (2020) and Plagborg-Møller et al. (2020) for the US, or An and Loungani (2020) for an analysis of the past performance of the Consensus Forecasts. As a partial solution, Foroni et al. (2020) employ simple mixed-frequency models to nowcast and forecast quarterly GDP growth for the US and the rest of the G7, using common monthly indicators such as industrial production, surveys, and the slope of the yield curve. They then adjust the forecasts by a specific form of intercept correction or by the similarity approach (see Clements and Hendry (1999) and Dendramis et al. (2020)), showing that the former can reduce the extent of the forecast error during the COVID-19 period. Schorfheide and Song (2020) do not include COVID-19 periods in the estimation of a mixed-frequency VAR model because those observations substantially alter the forecasts. An alternative approach is the specification of sophisticated nonlinear / time-varying models. While this is not without perils when used on short economic time series, it can yield some gains, see e.g.
The Macroeconomy as a Random Forest
I develop Macroeconomic Random Forest (MRF), an algorithm adapting the canonical Machine Learning (ML) tool to flexibly model evolving parameters in a linear macro equation. Its main output, Generalized Time-Varying Parameters (GTVPs), is a versatile device nesting many popular nonlinearities (threshold/switching, smooth transition, structural breaks/change) and allowing for sophisticated new ones. The approach delivers clear forecasting gains over numerous alternatives, predicts the 2008 drastic rise in unemployment, and performs well for inflation. Unlike most ML-based methods, MRF is directly interpretable -- via its GTVPs. For instance, the successful unemployment forecast is due to the influence of forward-looking variables (e.g., term spreads, housing starts) nearly doubling before every recession. Interestingly, the Phillips curve has indeed flattened, and its might is highly cyclical.
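The GTVP idea can be illustrated in miniature (a stand-in, not MRF itself): let a single split on a state variable play the role of the forest, with per-leaf OLS delivering a piecewise-constant coefficient path for the linear macro equation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300
x = rng.normal(size=T)
s = np.linspace(0.0, 1.0, T)             # observed state variable
beta_true = np.where(s < 0.5, 0.5, 2.0)  # structural break in the slope
y = beta_true * x + 0.1 * rng.normal(size=T)

def leaf_slope(mask):
    # OLS of y on x (with intercept) within one leaf of the tree
    X = np.column_stack([np.ones(mask.sum()), x[mask]])
    return np.linalg.lstsq(X, y[mask], rcond=None)[0][1]

# one split on the state variable stands in for the forest
split = np.median(s)
beta_hat = np.where(s < split, leaf_slope(s < split), leaf_slope(s >= split))
```

The recovered path beta_hat is a (crude) generalized time-varying parameter: the tree decides where the coefficient changes, and the linear equation in each leaf keeps the output directly interpretable.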
Macroeconomic Data Transformations Matter
Coulombe, Philippe Goulet, Leroux, Maxime, Stevanovic, Dalibor, Surprenant, Stéphane
Following the recent enthusiasm for Machine Learning (ML) methods and widespread availability of big data, macroeconomic forecasting research gradually evolved further and further away from the traditional tightly specified OLS regression. Rather, nonparametric non-linearity and regularization of many forms are slowly taking the center stage, largely because they can provide sizable forecasting gains with respect to traditional methods (see, among others, Kim and Swanson (2018); Medeiros et al. (2019); Goulet Coulombe et al. (2020); Goulet Coulombe (2020a)). In such environments, different linear transformations of the informational set X can change the prediction, and taking first differences may not be the optimal transformation for many predictors, despite the fact that it guarantees viable frequentist inference. For instance, in penalized regression problems, like Lasso or Ridge, different rotations of X imply different priors on β in the original regressor space.
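The rotation point is easy to verify numerically: OLS predictions are invariant to an invertible linear transformation of X, while ridge predictions are not, because the penalty amounts to a different prior on β in the rotated space. A minimal sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, lam = 100, 5, 10.0
X = rng.normal(size=(T, K))
y = rng.normal(size=T)
R = rng.normal(size=(K, K))   # an invertible linear transformation of X

def ols_fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# OLS fitted values are unchanged by the transformation ...
yhat_ols = X @ ols_fit(X, y)
yhat_ols_rot = (X @ R) @ ols_fit(X @ R, y)

# ... but ridge fitted values are not: the penalty ||beta||^2 means
# something different once the regressors have been rotated
yhat_ridge = X @ ridge_fit(X, y, lam)
yhat_ridge_rot = (X @ R) @ ridge_fit(X @ R, y, lam)
```

Hence the choice of transformation (levels, differences, factors, moving averages) is itself a modeling decision whenever any regularization is involved.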